K-Nearest Neighbours Directed Noise Injection in Multilayer Perceptron Training

Authors

  • M. Skurichina
  • Š. Raudys
  • R.P.W. Duin
Abstract

M. Skurichina (1), Š. Raudys (2), and R.P.W. Duin (1)

(1) Pattern Recognition Group, Department of Applied Physics, Delft University of Technology, P.O. Box 5046, 2600 GA Delft, The Netherlands
(2) Department of Data Analysis, Institute of Mathematics and Informatics, Akademijos 4, Vilnius 2600, Lithuania

The relation between classifier complexity and learning set size is very important in discriminant analysis. One way to overcome the complexity control problem is to add noise to the training objects, thereby increasing the size of the training set. Both the amount and the directions of the injected noise are important factors that determine its effectiveness for classifier training. In this paper, the effect of injecting Gaussian spherical noise and k nearest neighbours directed noise on the performance of multilayer perceptrons is studied. As an analytical investigation for multilayer perceptrons is not feasible, a theoretical analysis is made for statistical classifiers, with the goal of obtaining a better understanding of the effect of noise injection on the accuracy of sample-based classifiers. Both empirical and theoretical studies show that k nearest neighbours directed noise injection is preferable to Gaussian spherical noise injection for data with a low intrinsic dimensionality.
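
As an illustration of the two augmentation schemes compared in the abstract, the following sketch (Python with NumPy) generates noisy copies of a training set: spherical injection adds isotropic Gaussian noise, while k nearest neighbours directed injection perturbs each object along the direction towards one of its k nearest neighbours of the same class, so the new objects stay close to a low-dimensional data manifold. The function names, the Gaussian scale factor along the neighbour direction, and the default parameter values are illustrative assumptions, not specifics taken from the paper.

    import numpy as np

    def spherical_noise(X, y, n_copies=2, sigma=0.1, rng=None):
        # Gaussian spherical noise: add isotropic noise to every training object.
        rng = rng or np.random.default_rng(0)
        X_new = [X] + [X + sigma * rng.standard_normal(X.shape) for _ in range(n_copies)]
        return np.vstack(X_new), np.tile(y, n_copies + 1)

    def knn_directed_noise(X, y, k=3, n_copies=2, sigma=0.5, rng=None):
        # k-NN directed noise (illustrative): perturb each object along the vector
        # towards a randomly chosen one of its k nearest same-class neighbours.
        rng = rng or np.random.default_rng(0)
        X_new = [X]
        for _ in range(n_copies):
            Xc = X.copy()
            for i, x in enumerate(X):
                same = np.flatnonzero(y == y[i])
                same = same[same != i]                    # exclude the object itself
                dist = np.linalg.norm(X[same] - x, axis=1)
                neighbours = same[np.argsort(dist)[:k]]   # k nearest same-class objects
                j = rng.choice(neighbours)
                lam = sigma * rng.standard_normal()       # Gaussian scale along that direction
                Xc[i] = x + lam * (X[j] - x)
            X_new.append(Xc)
        return np.vstack(X_new), np.tile(y, n_copies + 1)

The enlarged set returned by either function would then replace the original training set when the multilayer perceptron is trained.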


Similar Articles


Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training

We analyze the effects of analog noise on the synaptic arithmetic during multilayer perceptron training by expanding the cost function to include noise-mediated terms. In the light of these calculations, predictions are made that fault tolerance, training quality and training trajectory should all be improved by such noise injection. Extensive simulation experiments on two distinct cla...
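
A minimal sketch of the weight-noise idea described above, assuming the Gaussian perturbation of the synaptic weights is applied only while computing the forward and backward pass of each step and the update is made to the noise-free weights; the network shape, function name, and noise level are illustrative assumptions, not taken from that paper.

    import numpy as np

    def train_step_with_weight_noise(W1, W2, x, t, lr=0.01, noise_std=0.02, rng=None):
        # One step for a one-hidden-layer MLP with Gaussian noise injected into
        # the synaptic weights during the forward/backward computation.
        rng = rng or np.random.default_rng(0)
        W1n = W1 + noise_std * rng.standard_normal(W1.shape)   # noisy synapses
        W2n = W2 + noise_std * rng.standard_normal(W2.shape)
        h = np.tanh(W1n @ x)                                   # hidden activations
        e = W2n @ h - t                                        # output error (linear output)
        grad_W2 = np.outer(e, h)
        grad_W1 = np.outer((W2n.T @ e) * (1.0 - h ** 2), x)
        return W1 - lr * grad_W1, W2 - lr * grad_W2            # update the noise-free weights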


Deterministic nonmonotone strategies for effective training of multilayer perceptrons

We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which error function values are allowed to increase at some epochs. To this end, we argue that the current error function value must satisfy a nonmonotone criterion with respect to the maximum error function value of the M previous epochs, and we propose a subpr...
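
A small sketch of the nonmonotone criterion described in this blurb: a new error value is accepted as long as it does not exceed the maximum error of the M previous epochs, so temporary increases of the error function are tolerated. What the training algorithm does upon acceptance or rejection (for example, adapting the step size) is omitted, and all names are illustrative.

    from collections import deque

    def nonmonotone_accept(current_error, error_history, M=10):
        # Accept if the error does not exceed the maximum over the last M epochs.
        recent = list(error_history)[-M:]
        return (not recent) or current_error <= max(recent)

    # Illustrative use inside a training loop (the epoch errors are stand-ins):
    history = deque(maxlen=10)
    for epoch_error in [0.90, 0.70, 0.75, 0.60]:
        if nonmonotone_accept(epoch_error, history, M=10):
            history.append(epoch_error)       # keep this epoch's weight update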


Generalized neural trees for pattern classification

In this paper, a new neural tree (NT) model, the generalized NT (GNT), is presented. The main novelty of the GNT consists in the definition of a new training rule that performs an overall optimization of the tree. Each time the tree is increased by a new level, the whole tree is reevaluated. The training rule uses a weight correction strategy that takes into account the entire tree structure, a...



Journal title: IEEE Transactions on Neural Networks

Volume:   Issue:

Pages:  -

Publication date: 2000